17 research outputs found

    Speaker Prediction based on Head Orientations


    Verbal behavior of the more and the less influential meeting participant

    Argumentation can be defined as a social, intellectual, verbal activity that serves to justify or to refute an opinion, consisting of a constellation of statements and directed towards obtaining the approbation of an audience. It is reasonable to expect a relationship between the phenomena of argumentation and influence. The aim of this paper is to test, on empirical grounds, the strength of the relationship between the way people behave in a discussion and their perceived level of influence. Using data collected from the AMI corpus for experiments in the areas of argumentation, dialogue-act, and influence research, statistical dependencies and (cor)relations between the tags are mined for possible relationships. We report on the relationships that were found and how they can be used to construct a tentative profile of how influential participants, as experienced by actual meeting participants, distinguish themselves from less influential ones.
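    The tag-dependency mining mentioned above can be illustrated with a toy contingency analysis. The sketch below is not from the paper: it computes the chi-square statistic for a hypothetical 2x2 table relating the presence of an argumentation tag to a participant's perceived influence, with all counts invented for illustration.

```python
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for a 2x2 contingency table.

    Cells (all hypothetical counts):
        a: tag present, high influence    b: tag present, low influence
        c: tag absent,  high influence    d: tag absent,  low influence
    """
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: influential participants use the tag more often.
print(chi_square_2x2(30, 10, 10, 30))  # -> 20.0
```

A large statistic relative to the chi-square distribution with one degree of freedom would suggest a dependency between the tag and the influence label worth reporting.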

    Virtual Meeting Rooms: From Observation to Simulation

    Virtual meeting rooms are used to simulate real meeting behavior and can show how people behave during conversations: how they gesture, move their heads and bodies, and direct their gaze. They are used to visualize models of meeting behavior, and they can be used to evaluate these models. They are also used to show the effects of controlling certain parameters on behavior, and in experiments examining the effect on communication when various channels of information (speech, gaze, gesture, posture) are switched off or manipulated in other ways. The paper presents the various stages in the development of a virtual meeting room and illustrates its uses by presenting results of experiments on whether human judges can infer conversational roles in a virtual meeting situation when they see only the head movements of the participants.

    Virtual Meeting Rooms: From Observation to Simulation

    Much working time is spent in meetings and, as a consequence, meetings have become the subject of multidisciplinary research. Virtual Meeting Rooms (VMRs) are 3D virtual replicas of meeting rooms in which various modalities, such as speech, gaze, distance, gestures and facial expressions, can be controlled. This allows VMRs to be used to improve remote meeting participation, to visualize multimedia data, and as an instrument for research into social interaction in meetings. This paper describes how these three uses can be realized in a VMR. We describe the process from observation through annotation to simulation, and a model that describes the relations between the annotated features of verbal and non-verbal conversational behavior. As an example of social perception research in the VMR, we describe an experiment assessing human observers' accuracy in judging head orientation.

    Differences in head orientation behavior for speakers and listeners: An experiment in a virtual environment

    An experiment was conducted to investigate whether human observers use knowledge of the differences in focus of attention in multiparty interaction to identify the speaker among the meeting participants. A virtual environment was used to ensure good stimulus control. Head orientations, derived from a corpus of tracked head movements, were displayed as the only cue for focus of attention. We present some properties of the relation between head orientations and speaker-listener status, as found in the corpus. The experiment indicates that people do use knowledge of the patterns in focus of attention to distinguish the speaker from the listeners. However, human speaker-identification accuracy was rather low: head orientations (or focus of attention) alone do not provide a sufficient cue for reliable identification of the speaker in a multiparty setting.
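    The identification task the observers faced can be sketched as a simple majority-gaze rule: predict the speaker as the participant most others orient toward. This is an illustrative sketch, not the authors' model; the participant names and the gaze mapping below are hypothetical.

```python
from collections import Counter

def predict_speaker(gaze_targets):
    """Predict the speaker as the participant who receives the most
    head orientations.

    gaze_targets maps each participant to the participant their head
    is oriented toward (a stand-in for tracked head-movement data).
    """
    votes = Counter(gaze_targets.values())
    speaker, _ = votes.most_common(1)[0]
    return speaker

# Hypothetical four-party meeting: A, C and D all orient toward B.
gaze = {"A": "B", "B": "C", "C": "B", "D": "B"}
print(predict_speaker(gaze))  # -> B
```

The abstract's conclusion corresponds to this rule often failing in practice: listeners do not reliably orient toward the current speaker, so the majority vote is frequently wrong.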

    Dialogue-act tagging using smart feature selection: results on multiple corpora

    This paper presents an overview of our ongoing work on dialogue-act classification. Results are presented on the ICSI and Switchboard corpora and on a selection of the AMI corpus, setting a baseline for forthcoming research. For these corpora the best accuracy scores obtained are 89.27%, 65.68%, and 59.76%, respectively. We introduce a smart compression technique for feature selection and compare the performance on a subset of the AMI transcriptions with AMI-ASR output for the same subset.
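    The general setup of such a tagger can be illustrated with a minimal sketch: a unigram Naive Bayes classifier whose vocabulary is pruned by a frequency cutoff. The cutoff here is only a crude stand-in for the paper's compression technique, which the abstract does not detail, and the utterances and dialogue-act labels are invented.

```python
import math
from collections import Counter, defaultdict

def train(samples, min_count=2):
    """Train a unigram Naive Bayes dialogue-act tagger.

    Only words occurring at least min_count times are kept as
    features: a frequency cutoff standing in for the paper's
    (undetailed) feature-selection step.
    """
    word_freq = Counter(w for text, _ in samples for w in text.split())
    vocab = {w for w, c in word_freq.items() if c >= min_count}
    label_counts = Counter(label for _, label in samples)
    word_counts = defaultdict(Counter)
    for text, label in samples:
        for w in text.split():
            if w in vocab:
                word_counts[label][w] += 1
    return vocab, label_counts, word_counts

def classify(text, model):
    vocab, label_counts, word_counts = model
    total = sum(label_counts.values())
    best_label, best_lp = None, float("-inf")
    for label, count in label_counts.items():
        lp = math.log(count / total)  # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.split():
            if w in vocab:  # Laplace-smoothed word likelihood
                lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best_label, best_lp = label, lp
    return best_label

# Hypothetical utterances labeled with dialogue acts.
samples = [
    ("yes that is right", "agree"),
    ("yes i agree", "agree"),
    ("what do you think", "question"),
    ("do you agree", "question"),
]
model = train(samples)
print(classify("yes i think that is right", model))  # -> agree
```

Real systems of this kind add many more feature types (n-grams, speaker change, utterance length, prosody), but the score computation follows the same pattern.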

    Pro-active Meeting Assistants: Attention Please!

    This paper gives an overview of pro-active meeting assistants: what they are and when they can be useful. We explain how to develop such assistants with respect to requirement definitions, and elaborate on a set of Wizard of Oz experiments aiming to find out in which form a meeting assistant should operate to be accepted by participants, and whether meeting effectiveness and efficiency can be improved by an assistant at all.

    NSC21515

    This paper presents a virtual rap dancer that is able to dance to the beat of music coming from recordings, beats obtained from music, voice or other input through a microphone, motion beats detected in the video stream of a human dancer, or motions detected on a dance mat. The rap dancer's moves are generated from a lexicon that was derived manually from analysis of video clips of rap songs performed by various rappers. The system allows the moves in the lexicon to be adapted on the basis of style parameters. The rap dancer invites the user to dance along with the music.